Model order reduction methods applied to neural network training


Abstract

Neural networks have emerged as powerful and versatile tools in the field of deep learning. As task complexity increases, so do the size and architectural complexity of the network, causing compression techniques to become a focus of current research. Parameter truncation can provide a significant reduction in memory and computational complexity. Originating from the model order reduction framework, the Discrete Empirical Interpolation Method (DEIM) is applied to the gradient descent training of neural networks to analyze them for important parameters. The approach is compared against established methods for various state-of-the-art networks. Further, metrics like the L2 and Cross-Entropy loss, as well as the accuracy rate, are reported.
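A minimal sketch of what DEIM-based parameter selection could look like in this setting, assuming gradient snapshots collected over training iterations; the snapshot construction, the rank k, and all names below are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection on a basis matrix U (n x m)."""
    m = U.shape[1]
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector at the points chosen so far...
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        # ...and pick the entry where the interpolation residual is largest.
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx

# Hypothetical snapshot matrix: each column holds the flattened parameter
# gradient from one gradient-descent iteration.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 50))

# POD basis of the snapshots, truncated to rank k (k is an arbitrary choice).
k = 10
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
important = deim_indices(U[:, :k])
print("parameter indices flagged as important:", sorted(important))
```

The greedy loop applies the standard DEIM criterion: at each step it selects the parameter index where the current basis interpolates the next basis vector worst, which is what makes the chosen indices "important" in the reduced model.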


Similar Articles

Neural Network Training with Second Order Algorithms

Second order algorithms are very efficient for neural network training because of their fast convergence. In traditional implementations of second order algorithms [Hagan and Menhaj 1994], the Jacobian matrix is calculated and stored, which may cause memory limitation problems when training with large-sized patterns. In this paper, the proposed computation is introduced to solve the memory limitation pr...
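One way the memory bottleneck can be avoided, sketched under stand-in assumptions (the lm_step helper and its interface are hypothetical, not the paper's algorithm): accumulate the quasi-Hessian J^T J and the gradient vector J^T e one training pattern at a time, so the full Jacobian is never stored.

```python
import numpy as np

def lm_step(rows, n_params, mu=1e-3):
    """One Levenberg-Marquardt update, accumulating J^T J and J^T e
    pattern by pattern instead of building the full Jacobian matrix.

    rows: iterable of (jacobian_row, error) pairs, one per training pattern.
    """
    JtJ = np.zeros((n_params, n_params))
    Jte = np.zeros(n_params)
    for j_p, e_p in rows:
        JtJ += np.outer(j_p, j_p)  # rank-one contribution of this pattern
        Jte += j_p * e_p
    # Damped Gauss-Newton system: (J^T J + mu I) dw = J^T e
    return np.linalg.solve(JtJ + mu * np.eye(n_params), Jte)

# Toy usage with random rows standing in for backpropagated Jacobian rows.
rng = np.random.default_rng(1)
rows = [(rng.standard_normal(5), rng.standard_normal()) for _ in range(100)]
delta_w = lm_step(rows, n_params=5)
print("weight update:", delta_w)
```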


Assessment Methods of Neural Network Classification Applied to Otoneurological Data

Neural networks are applied, among other things, to classify medical data into various disease classes. It is a great advance if data relevance and the complicated relations between input data and output values (classifications) can be thoroughly analyzed. Nevertheless, basic neural network techniques commonly do not allow such analysis. We developed methods to analyze such relevance and re...


Trajectory Methods for Neural Network Training

A new class of methods for training multilayer feedforward neural networks is proposed. The proposed class of methods draws from methods for solving initial value problems of ordinary differential equations and belongs to the subclass of trajectory methods. The training of a multilayer feedforward neural network is equivalent to the minimization of the network's error function with respect to t...
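To make the connection concrete, training under a trajectory method can be viewed as integrating the gradient-flow initial value problem dw/dt = -grad E(w) with a standard ODE solver; the toy quadratic error surface and solver settings below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy quadratic error surface standing in for a network's error function.
A = np.diag([1.0, 10.0])

def grad_E(w):
    return A @ w

# Gradient flow: dw/dt = -grad E(w). A trajectory method hands this
# initial value problem to an ODE integrator instead of taking
# fixed-size gradient-descent steps.
sol = solve_ivp(lambda t, w: -grad_E(w), (0.0, 5.0), [2.0, -1.5])
print("final weights:", sol.y[:, -1])
```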


Global Search Methods for Neural Network Training

In many cases, supervised neural network training using a backpropagation-based learning rule can become trapped in a local minimum of the error function. These training algorithms are local minimization methods and have no mechanism that allows them to escape the influence of a local minimum. The existence of local minima is due to the fact that the error function is the superposition of nonline...
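One simple global-search strategy in this spirit is multistart: repeat a local minimization from many random initial weights and keep the best result. The toy error function, restart count, and initialization range below are arbitrary illustration choices, not the specific methods of the cited work.

```python
import numpy as np
from scipy.optimize import minimize

# Multimodal toy error function standing in for a network's error surface.
def E(w):
    return np.sin(3 * w[0]) ** 2 + 0.1 * w[0] ** 2

# Multistart: run a local minimizer from several random starting points
# and keep the lowest minimum found, escaping any single basin.
rng = np.random.default_rng(2)
best = min(
    (minimize(E, rng.uniform(-4, 4, size=1)) for _ in range(20)),
    key=lambda res: res.fun,
)
print("best minimum found:", best.x, best.fun)
```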


Single-Model-Bootstrap Applied to Neural Network Rainfall-Runoff Forecasting

Most neural network hydrological modelling has used split-sample validation to ensure good out-of-sample generalisation and thus safeguard each potential solution against the danger of overfitting. However, given that each sub-set is required to provide a comprehensive and sufficient representation of both environmental inputs and hydrological processes, partitioning the data could create ...
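A rough sketch of the single-model bootstrap idea, with a linear least-squares fit standing in for the neural network and all data synthetic: the same model specification is refit on resamples of the training set drawn with replacement, and the spread of the resulting forecasts indicates predictive uncertainty.

```python
import numpy as np

# Synthetic regression data standing in for rainfall-runoff records.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.standard_normal(200)

B, n = 50, len(y)
x_new = np.array([1.0, 0.0, -1.0])
preds = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # resample rows with replacement
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(x_new @ w)

# The spread of the bootstrap forecasts gives an uncertainty estimate
# that a single split-sample validation run does not provide.
print(f"forecast: {np.mean(preds):.3f} +/- {np.std(preds):.3f}")
```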



Journal

Journal: Proceedings in Applied Mathematics & Mechanics

Year: 2023

ISSN: 1617-7061

DOI: https://doi.org/10.1002/pamm.202300078